L1/Lp Regularization of Differences

Author

  • Marcel van Gerven
Abstract

In this paper, we introduce L1/Lp regularization of differences as a new regularization approach that can directly regularize models such as the naive Bayes classifier and (autoregressive) hidden Markov models. An algorithm is developed that selects values of the regularization parameter based on a derived stability condition. For the regularized naive Bayes classifier, we show that the method performs comparably to a filtering algorithm based on mutual information on eight datasets selected from the UCI machine learning repository.
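The abstract does not spell out the penalty's exact form, but the name suggests an Lp norm taken over groups of parameter differences, with the group norms summed (an L1 norm across groups). The sketch below is a minimal illustration of that grouped structure, not the paper's actual construction: the function names are hypothetical, and encoding "differences" as deviations from the group mean is an assumption.

```python
def lp_norm(v, p):
    """Lp norm of a vector v, for p > 0."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)


def l1_lp_difference_penalty(groups, p=2.0, lam=1.0):
    """Sketch of an L1/Lp penalty on differences (hypothetical form).

    `groups` is a list of parameter groups. Within each group we take
    the Lp norm of the deviations from the group mean, then sum the
    group norms (L1 across groups) and scale by the regularization
    parameter `lam`.
    """
    total = 0.0
    for g in groups:
        mean = sum(g) / len(g)
        diffs = [x - mean for x in g]  # within-group differences
        total += lp_norm(diffs, p)
    return lam * total


# A group whose parameters already agree contributes zero, so the
# penalty pushes grouped parameters toward a shared value.
penalty = l1_lp_difference_penalty([[0.5, 0.5], [1.0, -1.0]], p=2.0, lam=0.1)
```

Under this form, the stability condition mentioned in the abstract would govern how large `lam` can be chosen before groups collapse entirely; the actual criterion is derived in the paper.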


Similar articles

Lp-regularized optimization by using orthant-wise approach for inducing sparsity

Sparsity induced in the optimized weights is effective for factorization that is robust to noise and for classification with feature selection. To enhance sparsity, L1 regularization is introduced into the objective cost function to be minimized. In general, however, Lp (p<1) regularization leads to sparser solutions than L1, though the Lp-regularized problem is difficult to be ef...

Full text

Cross-Cultural Differences and Pragmatic Transfer in English and Persian Refusals

This study aimed to examine cross-cultural differences in performing refusals of requests between Persian native speakers (PNSs) and English native speakers (ENSs) in terms of the frequency of semantic formulas. Also examined was whether Persian EFL learners would transfer their L1 refusal patterns into the L2, and whether there would be a relation between their proficiency level an...

Full text

Unifying Framework for Fast Learning Rate of Non-Sparse Multiple Kernel Learning

In this paper, we give a new generalization error bound for Multiple Kernel Learning (MKL) for a general class of regularizations. Our main target is dense-type regularizations, including lp-MKL, which imposes lp-mixed-norm regularization instead of l1-mixed-norm regularization. According to recent numerical experiments, sparse regularization does not necessarily show a good p...

Full text

Another look at linear programming for feature selection via methods of regularization

We consider statistical procedures for feature selection defined by a family of regularization problems with convex piecewise-linear loss functions and penalties of l1 nature. Many known statistical procedures (e.g. quantile regression and support vector machines with an l1-norm penalty) are subsumed under this category. Computationally, the regularization problems are linear programming (LP) prob...

Full text


Publication date: 2008